Systematic Reviews
Springer Science and Business Media LLC
Preprints posted in the last 30 days, ranked by how well they match the content profile of Systematic Reviews, based on 11 papers previously published here. The average preprint has a 0.08% match score for this journal, so anything above that is already an above-average fit.
Meng, G.; Chen, Y.; Dai, M.; Tang, S.; Chen, Q.
Background: Self-management is essential for stroke survivors to maintain a healthy lifestyle and reduce recurrence risk. Although theory-based self-management interventions are widely recommended, the theoretical frameworks underpinning them and their comparative effectiveness remain unclear. Aims: To systematically identify the theories, models, and frameworks (TMFs) used in self-management interventions for stroke survivors, to explore how they guide interventions, and to evaluate their effectiveness on self-management behaviors and self-efficacy. Methods: PubMed, Embase, Web of Science, ProQuest Health & Medical Collection, and the Cochrane Library were searched from inception to July 15, 2025. Randomized controlled trials and quasi-experimental studies evaluating theory-based self-management interventions for stroke survivors were included. Two reviewers independently screened studies, extracted data, and assessed risk of bias (Cochrane RoB 2.0). Meta-analyses were performed using random-effects models. Results: From 11,495 records, 32 studies with 3,212 participants were included. Sixteen distinct TMFs were identified; self-efficacy theory was most frequent (13/32), followed by social cognitive theory (6/32). All TMFs were middle-range theories. Meta-analysis showed TMF-based interventions significantly improved self-management behaviors (SMD = 4.26, 95% CI: 0.20-8.31, I² = 98.2%) and self-efficacy (SMD = 0.60, 95% CI: 0.32-0.88, I² = 72.8%). However, the effect for behaviors is likely inflated by extreme heterogeneity and theoretical diversity. Theory-specific analysis of self-efficacy theory (k = 8) confirmed significant effects on self-efficacy (SMD = 0.64, 95% CI: 0.21-1.08). Conclusions: This review identified 16 distinct theoretical models; self-efficacy theory was most frequently applied, followed by social cognitive theory. Theory-based interventions significantly improved self-management behaviors and self-efficacy.
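The pooled SMDs and I² statistics reported above come from a random-effects meta-analysis. As a minimal sketch of how such pooling works — using the DerSimonian-Laird estimator, which is one common choice, not necessarily the one used in this review, and with all inputs hypothetical — the calculation can be written as:

```python
import math

def pool_random_effects(effects, ses):
    """Pool per-study effect sizes (e.g. SMDs) under a DerSimonian-Laird
    random-effects model; returns (pooled effect, 95% CI, I^2 in %)."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q drives both tau^2 (between-study variance) and I^2
    q = sum(wi * (yi - fe)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights add tau^2 to each study's variance
    wr = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    se_pooled = math.sqrt(1 / sum(wr))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2
```

With highly heterogeneous inputs (large tau²), the pooled CI widens sharply — which is why the review flags the behavior-outcome estimate (I² = 98.2%) as likely inflated.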
Chorney, W.; Lisi, M.
Background: Postoperative delirium (POD) is a common complication of surgery. It is associated with a number of detrimental effects, including mortality and healthcare costs. We sought to determine whether common comorbidity indices are predictors of POD. Methods: Using the Medical Information Mart for Intensive Care (MIMIC)-IV database, we identified 8022 abdominal surgery procedures across 7212 adult patients. We calculated both the Charlson comorbidity index (CCI) and the Elixhauser comorbidity index (ECI) for each procedure and used logistic regression to predict postoperative delirium, defined as delirium within 30 days following the procedure. Results: Models based on either the CCI or the ECI were predictive of postoperative delirium (area under the receiver operating characteristic curve (AUC-ROC) of 0.622 and 0.652, respectively). However, the addition of other factors known to be associated with delirium improved model performance (AUC-ROC of 0.680). Conclusions: Both the CCI and ECI are predictors of postoperative delirium in patients undergoing abdominal surgery. Adding factors known to be associated with delirium provides additional predictive value, and such factors should be included in models that predict postoperative delirium.
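The AUC-ROC values used to compare the CCI and ECI models have a simple probabilistic reading: the chance that a randomly chosen delirium case is scored higher than a randomly chosen non-case. A hedged sketch (hypothetical labels and scores; the study itself will have used standard statistical software):

```python
def auc_roc(labels, scores):
    """AUC of the ROC curve via the Mann-Whitney formulation: the
    probability that a positive case outscores a negative one,
    with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this reading, an AUC-ROC of 0.652 means the ECI-based model ranks a true delirium case above a non-case about 65% of the time — modest discrimination, consistent with the authors' conclusion that additional predictors are needed.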
Boldbaatar, A.; Moullaali, T. J.; MacRaild, A.; Risbridger, S.; Hosking, A.; Richardson, C.; Clay, G. A.; Dennis, M.; Sprigg, N.; Barber, M.; Parry-Jones, A. R.; Weir, C. J.; Werring, D. J.; Salman, R. A.-S.; Samarasekera, N.
Background: Platform trials are an efficient trial design that enables testing of multiple interventions simultaneously. They could advance knowledge of treatments for intracerebral haemorrhage (ICH). We aimed to investigate the views of clinicians involved in stroke research on recruitment to a future platform trial for ICH. Methods: Between April and July 2025, we conducted a UK-wide online survey of clinicians actively involved in stroke research, using convenience sampling through professional organisations. Participants considered factors related to the consent process and research environment and could provide optional free-text responses about additional barriers or facilitators to recruitment. We used descriptive statistics for quantitative data and content analysis for qualitative data. Results: Among 73 respondents, 46 (63%) were female; 36 (50%) were stroke physicians, 24 (34%) nurses, 6 (8%) allied health professionals, and 7 (10%) were in other roles. Thirty-six (49%) had >20 years of clinical experience, and 45 (61%) reported spending <10% of their role in research. Sixty-six (91%) thought that a platform trial would be a good option for testing interventions for patients with stroke due to ICH. Across 11 modifiable factors, clinicians most frequently rated perceived importance of the research question as a facilitator of recruitment (94%), while clinician preference for specific treatments was most frequently rated as a barrier (48%). Two themes emerged from free-text responses: study design and infrastructure. Regarding study design, respondents perceived consent procedures (n=9), study materials (n=8), study procedures (n=8), eligibility assessment (n=6), the research question (n=3), and randomization (n=3) as important for a future platform trial. Regarding infrastructure, emergent factors were staffing (n=17), local research culture and capacity (n=9), research governance and delivery (n=6), and training (n=6).
Conclusion: The overwhelming majority of respondents from the UK clinical stroke community supported a platform trial for ICH, although the influence of survey responder bias is unknown.
Fraser, J. J.; Zouris, J. M.; Hoch, J. M.; Sessoms, P. H.; MacGregor, A. J.; Hoch, M. C.
Introduction: Musculoskeletal injuries (MSKIs) are ubiquitous in the U.S. military, especially among high-performing service members such as Marines. Because female service members began to be assigned to ground combat roles only in December 2015, evaluation of sex as a risk factor for MSKI in ground combat occupations was not possible until an ample population existed to study. The purpose of this population-level epidemiological study was to assess (1) whether female sex was a salient risk factor for MSKI in Marines serving in different military occupations, including combat arms, and (2) the effects of integration period on MSKI risk among female Marines. Materials and Methods: A population-based epidemiological retrospective cohort study of all U.S. Marines was performed assessing female sex, occupation, and integration period on the prevalence of MSKI from 2011 through 2020. The Military Health System Data Repository was utilized to identify initial healthcare encounters for diagnosed ankle-foot, knee, lumbopelvic-hip, thoracocostal, cervicothoracic, shoulder, elbow, or wrist-hand complex injuries. Prevalence was calculated for female and male Marines in each occupational category (combat, combat support, aviators, aviation support, services) during the pre-integration (2011-2015) and post-integration (2016-2020) periods. Results: During the pre-integration period, 520/1,000 female Marines (n=13,985) and 299/1,000 male Marines (n=142,158) incurred MSKIs. In the post-integration period, the prevalence increased to 565/1,000 female Marines (n=17,608) and 348/1,000 male Marines (n=161,429). In the multivariable evaluation of sex, occupation, integration period, and the interaction of sex and occupation on combined MSKIs, only female sex was a significant risk factor for injury (prevalence ratio [PR]=1.99), with service in ground combat and aviation occupations identified as protective when compared with services occupations (PR=0.69). When these same factors were evaluated for specific MSKI outcomes, female sex remained a robust factor in all lower quarter (PR=1.75-2.63) and upper quarter (PR=1.38-2.36) injuries except for shoulder injuries. Service in ground combat and aviation occupations was protective for all lower quarter injuries (PR=0.46-0.71). In the upper quarter, ground combat was protective for all injuries except elbow injuries (PR=0.67-0.77). Serving as an aviator was a risk factor for cervicothoracic (PR=1.57) and thoracocostal (PR=1.22) injuries and a protective factor for shoulder (PR=0.73) and wrist-hand (PR=0.46) injuries. Adjusted risk for lumbopelvic-hip (PR=1.13), ankle-foot (PR=1.53), cervicothoracic (PR=1.19), thoracocostal (PR=1.14), and elbow (PR=1.48) injuries increased significantly during the post-integration period. There was a significant sex-by-period interaction for shoulder injuries alone, with female sex in the post-integration epoch found to be salient (PR=1.26). Conclusions: Female sex was a salient factor for MSKI, with service in ground combat and aviation occupations identified as protective compared with services occupations. In the evaluation of specific MSKIs, female sex remained a robust and significant factor in all lower quarter and upper quarter injuries except for shoulder injuries. There was a significant sex-by-period interaction only for shoulder conditions, with an increased risk of these injuries in female Marines in the post-integration period.
Chorney, W.; Lisi, M.
Background: Postoperative delirium is a common complication in surgical patients and is associated with a multitude of negative outcomes, including mortality, dementia, and increased healthcare costs. Therefore, a better understanding of the factors that contribute to postoperative delirium, especially those that can be easily obtained, is important. Methods: We conducted a retrospective cohort study using patients from the Medical Information Mart for Intensive Care (MIMIC)-IV database. Adult patients undergoing abdominal surgery procedures who did not have pre-existing delirium were included in the study. Overall, we included 8022 procedures across 7212 patients. For each admission, we extracted values from common blood tests, the Charlson and Elixhauser comorbidity scores, and patient demographic information. We used stepwise logistic regression to identify predictive factors of postoperative delirium in this cohort. Results: The model isolated factors well known to be associated with postoperative delirium, such as age, comorbidity (as represented by the Elixhauser comorbidity score), and Parkinson's disease. The model also selected variables that are less studied, such as minimum preoperative platelets and maximum preoperative sodium levels. We hypothesize that the former is associated with postoperative delirium as a surrogate marker of inflammation (platelets being an acute-phase reactant), and the latter as a marker of cerebral edema and altered neurotransmission. Conclusion: Preoperative blood tests contain valuable information that can be used alongside patient demographics and past medical history to better predict the risk of postoperative delirium.
Cai, C.; Horm, D.; Fuhrman, B.; Van Pay, C. K.; Zhu, M.; Shelton, K.; Vogel, J.; Xu, C.
Abstract This protocol is reported in accordance with the SPIRIT 2025 guidelines for clinical trial protocols. Introduction: Young children, from birth to age 5 years, are particularly vulnerable to indoor air pollutants and respiratory pathogens. Portable air purifiers (or filtration) and upper-room ultraviolet germicidal irradiation (UVGI) are two widely used interventions with the potential to improve indoor air quality (IAQ) and reduce sick-related absences. However, a review of the literature revealed no real-world randomized studies evaluating their effectiveness in reducing young children's sick-related absences in early care and education (ECE) classrooms. Methods and Analysis: The OK-AIR study is a longitudinal, cluster-randomized 2x2 factorial trial conducted in Head Start centers using two implementation cohorts: Cohort 1 (five Head Start centers and 20 classrooms from 2023 to 2024) and Cohort 2 (11 centers and 59 classrooms from 2025 to 2026), with expanded inclusion of rural areas. Cohort 1 enrolled 204 children, 48 teachers, and 5 site directors, and Cohort 2 enrolled 462 children, 97 teachers, and 11 site directors. Within each center, four classrooms are randomized to: (1) control; (2) portable filtration; (3) upper-room ultraviolet germicidal irradiation (UVGI); or (4) both interventions. Cohort 2 was initially planned as a second factorial trial but was amended to a purifier-only design due to funding changes; details are provided in the protocol amendments section. We collect continuous IAQ data, including particulate matter (PM) with aerodynamic diameters ≤1 μm (PM1), ≤2.5 μm (PM2.5), ≤4 μm (PM4), and ≤10 μm (PM10); total volatile organic compounds (TVOCs) index; nitrogen oxides (NOx) index; carbon monoxide (CO); noise; temperature; and relative humidity, alongside daily child absences.
Seasonal environmental surface swabs (dining tables and toilet flooring) are tested by Reverse-Transcriptase quantitative Polymerase Chain Reaction (RT-qPCR) for Influenza A/B, Respiratory Syncytial Virus (RSV), Human Parainfluenza Virus Type 3 (HPIV3), Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), and Norovirus. IAQ monitoring is structured across Winter, Spring, Summer, and Fall, including designated baseline/off-period weeks to characterize temporal and seasonal variability in environmental measures across classrooms and centers. Multi-informant surveys (Director, Teacher, Parent) capture contextual factors, and children's social-emotional development is assessed using teacher ratings on the Devereux Early Childhood Assessment (DECA). The primary outcome is the sick-related absence rate, analyzed as cumulative absences over the attendance year while accounting for clustering by school and classroom using generalized mixed-effects models. Secondary outcomes include children's social-emotional ratings, IAQ metrics and pathogen detection rates; analyses of IAQ incorporate time/seasonal structure, and season-stratified absenteeism analyses will be treated as secondary/exploratory refinements. An economic evaluation will estimate incremental intervention costs and cost-effectiveness/cost-benefit (such as cost per sick-related absence day averted). Ethics and Dissemination: This study was approved by the Institutional Review Board (IRB) at the University of Oklahoma. Findings will be shared through peer-reviewed publications; presentations at local, state, and national conferences; research briefs developed for lay and policy audiences; and community briefings prioritizing the participating early childhood programs and communities. ISRCTN Trial Registration: ISRCTN78764448 Disclaimer: The views expressed are those of the authors and do not reflect the official views of the Uniformed Services University or the United States Department of War. 
Strengths and Limitations of This Study:
• Real-world longitudinal cluster RCT: The study uses a rigorous longitudinal cluster-randomized 2x2 factorial design in real-world ECE settings.
• Combined interventions: Interventions target both air filtration and disinfection, allowing for combined and comparative evaluation.
• Objective air quality monitoring: Continuous monitoring of IAQ metrics provides objective and reliable data on environmental change.
• Environmental pathogen surveillance: qPCR on surface swabs yields an objective biological outcome to triangulate with IAQ and absences.
• Comprehensive context and child measures: Multi-method and multi-reporter data collection includes Head Start attendance records, continuous air monitoring, pathogen detection, contextual surveys completed by center directors, teachers, and parents, and standardized social-emotional assessments (DECA) completed by classroom teachers. Head Start program records provide children's longer-term health data through Health Insurance Portability and Accountability Act (HIPAA) authorization.
• Clustered/temporal complexity: The seasonal design accounts for variation over time but may introduce complexity in modeling temporal effects.
• Practical implications: Study findings will have practical implications for Head Start and other ECE programs striving to maximize child attendance with cost-effective strategies.
Keywords: Early childhood; Head Start; indoor air quality (IAQ); air purifiers; filtration; ultraviolet germicidal irradiation; cluster randomized trial; absenteeism; environmental pathogens; DECA; cost-benefit analysis
Kang, C.-Q.; Chen, L.-P.; Wang, Y.-X.
Background: Early laparoscopic cholecystectomy (ELC) is the standard treatment for acute calculous cholecystitis (ACC), but difficult laparoscopic cholecystectomy (DLC) remains a challenge. Predicting DLC and ACC severity is crucial for clinical decision-making. Methods: This retrospective single-center study included 198 ACC patients who underwent ELC. Preoperative clinical, laboratory, and imaging data were analyzed. DLC was defined by operative time >90 min, conversion, or subtotal cholecystectomy. ACC severity was graded using TG18. Multivariate logistic regression identified independent predictors. Results: DLC occurred in 81 (40.9%) patients; 102 (51.5%) had severe ACC. Serum cholinesterase (ChE) and CRP were independent predictors of DLC. CRP and male sex independently predicted ACC severity. Other markers (e.g., NLR, PCT) were not independently associated. Conclusion: Preoperative ChE and CRP levels are reliable predictors of DLC, while CRP and male sex predict ACC severity. These findings support their use in risk stratification and surgical planning.
Costa-Santos, C.; Vidal, R.; Lisboa, S.; Vieira-de-Castro, P.; Monteiro, A.; Duarte, I.
Compassion fatigue is a well-documented hazard among healthcare and veterinary professionals, yet the psychological toll on informal caregivers of feral cat colonies, likely numbering several tens of thousands in Portugal, remains largely unexplored. This cross-sectional study examines internal and external factors associated with the secondary traumatic stress component of compassion fatigue among 172 informal caregivers in Portugal. Secondary traumatic stress refers to work-related secondary exposure to individuals who have experienced extremely stressful or traumatic events. Structured telephone interviews assessed sociodemographics, colony management, compassion satisfaction, resilience, spiritual well-being, and perceived social support. Univariate and multivariable linear regression identified predictors of compassion fatigue. Results indicate that 47% of participants experienced moderate compassion fatigue, and 10% reported high levels. Multivariable analysis revealed that caring for large colonies (more than 25 cats) and being unemployed were significantly associated with higher fatigue. Conversely, older age, higher perceived family support, and the resilience dimension of serenity served as protective factors. Interestingly, finding meaning in life was positively correlated with fatigue, suggesting that caregivers who perceive their role as central to their life purpose may become more emotionally invested, increasing vulnerability to distress when unable to help animals. Official colony registration and formal institutional support did not significantly alleviate fatigue. These findings highlight that institutional support alone is insufficient to mitigate fatigue among informal caregivers, who experience significant distress driven by both practical burdens and profound emotional involvement. The most frequently reported concern among caregivers was the inability to cover the costs of feeding and veterinary care for the cats. 
Interventions must address both external needs (e.g., support to cover veterinary and feeding expenses for the cats) and internal coping mechanisms. Implementing psychosocial support alongside trap-neuter-return programs may also improve caregiver well-being and foster sustainable urban feral cat management. This underscores a One Health perspective, demonstrating that animal health is closely interconnected with human well-being and environmental health.
Guyett, A.; Dunbar, C.; Lovato, N.; Nguyen, K.; Bickley, K.; Nguyen, P.; Reynolds, A.; Hughes, M.; Scott, H.; Adams, R.; Lack, L.; Catcheside, P.; Pinilla, L.; Cori, J.; Howard, M.; Anderson, C.; Stevens, D.; Bensen-Boakes, D.-B.; Montero, A.; Stuart, N.; Vakulin, A.
Background: Prolonged wakefulness, restricted sleep, and circadian factors can impair driving performance and road safety. Currently, there are no effective objective roadside tests to detect a driver's state of sleepiness during or prior to driving, or to predict future driving impairment risk. This paper reports on an extended wakefulness protocol used to determine whether a portable virtual reality device administering vestibular-ocular motor function (VOM) tests can effectively (1) detect a driver's state of sleepiness during or just prior to driving, and (2) predict trait sleepiness and future driving risk. Methods: Fifty healthy adults with regular sleep between 9 pm and 8 am were recruited for an experimental laboratory procedure involving two phases: an initial overnight sleep study and a subsequent period of extended wakefulness lasting ~29 hours. During the wakefulness phase, participants undertook neurobehavioural testing, a simulated driving test, and repeat assessments of VOM to establish whether ocular markers can predict sleepiness state and sleepiness-related performance impairments (trial registry ACTRN12621001610820). Discussion: This protocol outlines a study that aimed to establish the sensitivity of VOM tests to the effects of extended wakefulness and circadian phase on driver state and trait sleepiness and subsequent sleepiness-related driving impairment. Furthermore, the protocol aims to define the best VOM predictors of driver sleepiness state (roadside testing and pre-drive assessments) and sleepiness trait (predicting future driving risk) to establish proof of concept for its potential application as a roadside, pre-drive, and general sleepiness-related fitness-to-drive test.
Sarang, S.; Matingo-Mutava, E.; Cassim, N.
Background: The COVID-19 pandemic required South African public sector HIV viral load (VL) laboratories to scale up Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) testing while maintaining essential HIV services, placing additional pressure on diagnostic services. This dual mandate introduced significant occupational and environmental challenges (OEC) for staff that remain underexplored. Objective: This study aimed to investigate the OEC and effects that staff experienced during the implementation of COVID-19 testing at public sector VL laboratories in South Africa. Methods: A quantitative, cross-sectional study utilised a census approach among technical and support staff. Data were collected via a structured REDCap questionnaire using 5-point Likert scales. Pre- and post-implementation challenges were assessed across four domains: workload, environmental conditions (space, ventilation, waste), communication, and PPE availability. Statistical analyses included the Wilcoxon signed-rank and Spearman's correlation tests. Results: Perceived occupational challenges increased significantly across all domains post-implementation. Staff workload saw the highest rise (mean score 3.02 to 3.53). Adverse health effects were pervasive: 80.2% of staff reported burnout/fatigue, and 76.5% reported increased anxiety/stress. A strong positive correlation was observed between post-COVID-19 challenges and adverse mental and physical health outcomes (rho = 0.449, p < 0.001). Furthermore, 35.8% of staff considered resigning due to increased job demands. Conclusion: Integrating COVID-19 testing exacerbated systemic weaknesses, causing measurable psychological injury and threatening workforce retention. Findings suggest that the diagnostic workforce requires formal crisis surge staffing models and institutionalised mental health support to safeguard personnel and maintain essential services during future health emergencies.
Wolosker, M. B.; Tedde, M. L.; Noro Hamilton, N.; Wolosker, N.; Schmidt Aguiar, W. W.; da Costa Ferreira, H. P.; Westphal, F. L.; Rodrigues Lima, A. M.; de Oliveira, H. A.; L F Pereira, S. T.; de Oliveira Riuto, F.; C Resende, G.; Krum Brenner, M. M.; Bonomi, D. d. O.; Brero Valero, C. E.; Pego Fernandes, P. M.
OBJECTIVE: To compare, in a Brazilian population, the clinical efficacy and quality-of-life (QoL) impact of one-stage bilateral thoracic sympathectomy (BTS) versus unilateral sympathectomy on the dominant side (UniS), with additional analysis of patients who later underwent contralateral surgery (two-stage bilateral, 2stS). METHODS: Prospective, randomized, controlled, multicenter trial (11 centers) including 163 adults with primary palmar hyperhidrosis. Participants were randomized 1:1 to BTS or UniS. From 6 months onward, UniS patients could elect contralateral sympathectomy (2stS). Sweating severity was assessed using the Hyperhidrosis Disease Severity Scale (HDSS) across 18 anatomical sites at each visit. Compensatory sweating (CS) was defined as new sweating in previously unaffected areas (preoperative HDSS = 1) and graded by the magnitude of the HDSS increase. QoL was measured with two complementary validated instruments: HidroQOL and the Horn questionnaire. RESULTS: Baseline characteristics were similar between groups, with most participants presenting severe preoperative disease. Improvement in the operated (dominant) hand was comparable after BTS and UniS, whereas control of the non-operated hand favored BTS. In the UniS group, spontaneous contralateral improvement occurred in approximately one-seventh of untreated hands. The proportion of patients without CS was similar in both groups (~25%), but severe CS was more frequent after BTS (40.4% vs 21.0%, p = 0.0344). QoL improved in both groups, with larger and more sustained reductions in Horn and HidroQOL scores after BTS (p < 0.001). In the 2stS subgroup, contralateral surgery produced a consistent HDSS decrease and marked QoL improvement, with predominantly mild additional CS. CONCLUSIONS: BTS provides more complete symptom control and greater QoL improvement, but at the cost of more severe CS. UniS offers excellent control on the treated side, may reduce severe CS, and supports a staged strategy in which some patients avoid a second procedure (requested by 22.5% in this study); when needed, contralateral completion tends to restore additional clinical and QoL gains.
Kravos, A.; Dolenc, B.; Fartek, N.; Locatelli, I.; Cebron Lipovec, N.; Rogelj Meljo, N.; Kos, M.; Dobovsek, T.; Panter, G.
Iron deficiency (ID) is the most common nutritional deficiency worldwide, often caused by insufficient dietary intakes. Oral supplementation is one of the means to improve iron status. This study evaluated the efficacy and safety of two low-dose iron supplements - >Your< Iron Forte Capsules (YIFC) and Ferrous Sulfate Capsules (FSC) - in individuals with dietary ID. One hundred and one participants (mean age 30.6 years; 98% women) with low iron stores (mean serum ferritin 16.1 μg/L) were randomized to receive either YIFC or FSC once daily for 12 weeks. Changes in blood indices and iron-related parameters were assessed at four and 12 weeks of intervention relative to baseline. The primary outcome was the change in hemoglobin (Hb) after 12 weeks. Eighty-seven participants completed the study. Both supplements significantly increased Hb at 12 weeks (YIFC: mean 6.52 g/L, p<0.001; FSC: mean 5.71 g/L, p<0.001). Product-related adverse events (AEs) were few (17% of all AEs) and of mild to moderate intensity only. One participant receiving FSC withdrew due to a probable product-related AE. The frequencies of product-related AEs were similar between study arms; however, statistically significantly more AEs judged to be definitely related to the product occurred in the FSC arm. While product-related AEs were confined to the gastrointestinal tract in the YIFC arm, they affected multiple organ systems in the FSC arm. Supplementation with either YIFC or FSC proved to be an effective, well-tolerated, and safe strategy for improving iron status in non-anemic dietary iron deficiency. In terms of the AE profile, supplementation with YIFC may offer advantages over supplementation with FSC.
Silburn, A.
Background: Helmet use is a proven safety measure that reduces the risk of head injury among cyclists and e-scooter riders. Despite legal requirements for pedal bikes and e-bikes in Australia, compliance varies, particularly among users of electric vehicles. The growing popularity of e-bikes and e-scooters in urban areas presents new public health challenges, yet observational data on helmet use, behavioural determinants, and the effectiveness of safety interventions remain limited. Aim: Phases 1 and 2 aim to assess helmet use among e-bike, pedal bike, and e-scooter riders in Canberra, and evaluate the impact of health-benefit and legal-penalty signage on compliance. Methods: This study employs a multi-phase, quasi-experimental observational design across three urban bike paths in Canberra. Phase 1 (Baseline): Helmet use will be recorded via discreet video surveillance, capturing vehicle type, estimated age group, gender presentation, and weather conditions. Phase 2 (Intervention): Two sites will receive signage emphasising either safety benefits or legal penalties, while a third site serves as a control; post-intervention observations will assess changes in helmet compliance. Expected Results: Baseline helmet use is expected to be higher among pedal bike riders than e-bike and e-scooter riders. Signage interventions are anticipated to increase compliance, with potential variation by message type, vehicle type, and rider demographics. Trial Registration: Australian and New Zealand Clinical Trials Registry (ANZCTR) [ACTRN12626000245392]
Park, S. A.; Kim, H. Y.
This systematic review and meta-analysis aimed to evaluate the effectiveness of relaxation interventions on anxiety, depression, stress, and quality of life in women with infertility. A comprehensive search of PubMed, OVID MEDLINE, CINAHL, Google Scholar, and Korean databases was conducted for articles published through March 2025. Keywords included combinations of terms related to infertility, ART, and nursing or psychotherapeutic interventions. The search identified 759 records, of which 13 met the eligibility criteria. Methodological quality was assessed using the Cochrane Risk of Bias tool, and data analysis was performed using R software (version 4.3.2). The meta-analysis included 10 randomized controlled trials (RCTs) and three non-randomized controlled trials (NRCTs), comprising 1,215 women undergoing ART. Intervention groups received relaxation programs, while comparison groups received usual care or no intervention. Relaxation interventions were associated with significant reductions in anxiety (Hedges' g = -0.69) and depression (Hedges' g = -0.38), and significant improvements in quality of life (Hedges' g = 0.25). No statistically significant effect was observed for stress (Hedges' g = -0.01; 95% CI: -0.49 to 0.47). Heterogeneity and risk of publication bias were determined to be low. Overall, relaxation programs demonstrated beneficial effects on anxiety, depression, and quality of life, but not on stress levels. Relaxation interventions appear to support the psychological well-being of women undergoing ART, with particular benefit for women with a history of repeated treatment failure. Individualized, woman-centered approaches may be more responsive to the needs of this population than universal or group-based models of care.
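The effect sizes above are reported as Hedges' g, which is Cohen's d with a small-sample bias correction. A minimal sketch of the standard summary-statistics formula (all inputs hypothetical, not taken from the included trials):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d computed from the pooled SD, multiplied by
    the small-sample correction factor J = 1 - 3/(4N - 9)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # bias correction for small samples
    return d * j
```

By convention, negative g here favors the intervention for symptom scales (anxiety, depression), while positive g favors it for quality of life — matching the signs reported above.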
Rakhshanda, S.; Jonnagaddala, J.; Liaw, S.-T.; Rhee, J.; Rye, K.-A.
Objective: The objective of this study was to explore predictors of statin intolerance in the primary and secondary prevention of CVD among patients in the first two years after the date of first prescription, using real-world data. Methods: This study used the Electronic Practice Based Research Network Linked Dataset. An algorithm based on patients' muscle symptoms and creatine kinase levels was used to identify statin-intolerant patients. All analyses were performed in R. Descriptive and multivariate logistic regression analyses were performed, along with a sensitivity analysis using the Akaike Information Criterion model selection method. Results: Overall, 4,016 patients accounting for 60,873 visits met the selection criteria. About 3.5% of the patients were statin intolerant. After adjusting for all other variables, statin intolerance was positively associated with gender (AOR 1.5, 95% CI 1.0-2.2), SEIFA index (AOR 3.8, 95% CI 2.3-6.7), employment status (AOR 2.4, 95% CI 1.1-5.7), and comorbidities (AOR 7.0, 95% CI 2.2-19.0). The model from the sensitivity analysis showed associations in the same direction as the regression model. However, since unrecorded employment status showed a positive association, the sensitivity analysis suggests this relationship may be influenced by residual confounding or information bias, indicating that the finding should be interpreted with caution. Conclusion: Statin intolerance within the diverse community represented in the dataset is driven by gender, employment status, area-based social advantage and disadvantage index, and comorbidities.
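Adjusted odds ratios like those above come from exponentiating logistic regression coefficients. A minimal sketch, using hypothetical coefficient and standard-error values (chosen so the output resembles the shape of the reported gender estimate) and the large-sample Wald approximation for the 95% CI:

```python
import math

def aor_with_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient (log-odds) into an
    adjusted odds ratio with a Wald 95% CI (normal approximation)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a binary predictor (e.g. gender)
aor, lo, hi = aor_with_ci(beta=0.405, se=0.20)
print(f"AOR {aor:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```

A CI whose lower bound sits at or below 1.0 (as for gender here) signals a borderline association, which is why such estimates are usually read alongside sensitivity analyses.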
Lee, D. C. W.; O'Brien, K. M.; Presseau, J.; Yoong, S.; Lecathelinais, C.; Wolfenden, L.; Thomas, J.; Arno, A.; Hutton, B.; Hodder, R. K.
Background: Systematic reviews are important for informing public health policies and program selection; however, they are time- and resource-intensive. Artificial intelligence (AI) offers a way to reduce these labour-intensive requirements for various aspects of systematic review production, including data extraction. To date, there is limited robust evidence evaluating the accuracy and efficiency of AI for data extraction. This study within a review (SWAR) aimed to determine whether human data extraction assisted by an AI research assistant (Elicit®) is noninferior to human-only data extraction in terms of accuracy (i.e. agreement) and time-to-completion. Secondary aims included comparing error types and costs. Methods: A two-arm noninferiority SWAR was conducted to compare AI-assisted and human-only data extraction from 50 RCTs of chronic disease interventions. Participants were randomised to extract all data required for conducting a review using either the AI-assisted or the human-only method. Accuracy was assessed using a three-point rubric by an independent assessor blinded to group allocation, based on agreement between the extracted data and the assessor. Accuracy scores were standardized to a 0-100 scale. Analysis included overall and subgroup accuracy (by data group and data type) using paired t-tests. Time-to-completion was self-reported by data extractors. Errors were coded by type and severity, and costs were calculated for data extraction, preparation of files, training, and the Elicit® Pro subscription. Results: There was no difference in overall accuracy between the AI-assisted and human-only arms (mean difference (MD) 0.57 on a 0-100 scale, 95% confidence interval (CI) -1.29, 2.43). Subgroup analysis by data group found AI-assisted extraction to be more accurate than human-only extraction for variables describing the intervention and control groups (MD 4.75, 95% CI 2.13, 7.38), but otherwise no subgroup differences were observed. AI-assisted data extraction was significantly faster (MD 24.82 mins, 95% CI 18.80, 30.84). The AI-assisted arm made similar error types (missed or omitted data: AI-assisted 3.6%, human-only 3.4%) with similar severity (minor errors: AI-assisted 6.7%, human-only 6.5%), and cost $181.98 less than human-only data extraction across the 50 studies. Conclusion: AI-assisted data extraction using Elicit® showed noninferior accuracy, faster completion times, similar error types and severity, and lower costs compared to human-only extraction. These efficiency gains, without loss of accuracy, suggest AI-assisted data extraction can replace one human-only data extractor in future systematic reviews of RCTs. Future research should explore different models of AI data extraction, such as two AI-assisted extractors or an AI-only extractor paired with a human-only extractor, and comparison of AI-assisted to AI-only extraction.
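The noninferiority conclusion rests on comparing the confidence interval of the mean accuracy difference against a prespecified margin. A schematic sketch of that decision rule — the margin (5 points) is hypothetical, as the abstract does not state the study's actual margin, and the standard error is back-calculated from the reported CI under a normal approximation:

```python
def noninferior(mean_diff, se, margin, z=1.96):
    """AI-assisted minus human-only accuracy (0-100 scale).
    Noninferior if the lower CI bound stays above -margin
    (normal approximation for the paired mean difference)."""
    lo = mean_diff - z * se
    hi = mean_diff + z * se
    return lo > -margin, (lo, hi)

# Values echoing the reported MD 0.57 (95% CI -1.29, 2.43); margin assumed 5
ok, (lo, hi) = noninferior(mean_diff=0.57, se=0.95, margin=5.0)
print(ok, round(lo, 2), round(hi, 2))
```

The same CI also excludes any meaningful superiority of either arm, which is why the abstract reports "no difference" alongside noninferiority.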
Luo, X.; Huang, H.; Xu, S.; Li, G.; Zhang, Y.; Luo, Y.; Kong, Q.; Liu, C.; Xie, Y.; Deng, G.; Wang, Y.; Ao, D.; Lan, L.; Yu, Y.; Tang, Z.; Wang, W.
Background: Successful recanalisation without functional independence is a frequent phenomenon following endovascular thrombectomy for large vessel occlusion stroke. Aim: To demonstrate the safety and efficacy of adjunct tirofiban therapy after endovascular thrombectomy in patients with anterior circulation large vessel occlusion stroke achieving successful recanalisation, defined as modified Thrombolysis In Cerebral Infarction (mTICI) 2b-3. Design: The study of adjunct tirofiban treatment after successful endovascular thrombectomy recanalisation (ATTRACTION) is a multicenter, prospective, double-blind, randomized trial enrolling 1360 patients in China. Eligible patients will be randomised 1:1 to either the tirofiban or the placebo group. Outcome: The primary efficacy outcome is the proportion of participants with a modified Rankin Scale (mRS) score of 0-2 at 90 days, and the primary safety outcome is symptomatic intracranial haemorrhage within 48 hours of randomisation. Conclusion: This study will provide evidence on the efficacy and safety of sequential tirofiban therapy after successful recanalisation in patients with anterior circulation large vessel occlusion stroke. Trial registration number: NCT06265051. WHAT IS ALREADY KNOWN ON THIS TOPIC: Successful recanalisation without functional independence is a frequent phenomenon following endovascular thrombectomy, and previous small-sample, retrospective studies supported adjunct tirofiban therapy in patients achieving successful recanalisation after endovascular thrombectomy. WHAT THIS STUDY ADDS: The ATTRACTION trial aims to assess the efficacy and safety of adjunct tirofiban therapy, and the protocol describes the rationale and design of the trial. HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY: The ATTRACTION trial will inform whether tirofiban therapy after successful recanalisation by endovascular thrombectomy can improve patient outcomes.
Rakhshanda, S.; Jonnagaddala, J.; Liaw, S.-T.; Rhee, J.; Rye, K.-A.
Purpose: The objective of this study was to identify predictors of statin adherence in the primary and secondary prevention of CVD among patients in the first two years after the date of first prescription, using real-world data. Methods: The Electronic Practice Based Research Network Linked Dataset was used in this study. Statin adherence was calculated using a modified proportion of days covered (PDC) formula. Individuals with PDC ≥ 80% during the two-year observation period were considered adherent. All analyses were performed with R software. Descriptive and multivariate logistic regression analyses were performed. A sensitivity analysis was performed using the Akaike Information Criterion model selection method. Results: Overall, 3,432 patients accounting for 57,227 visits met the selection criteria. The mean PDC was 91.6% (±22.2%), and 72.0% of the patients were adherent to statins (PDC ≥ 80%) in the first two years after the date of first prescription. After adjusting for all other variables, statin adherence was positively associated with age (AOR 1.7, 95% CI 1.4-2.0), SEIFA index (AOR 1.8, 95% CI 1.2-2.6), polypharmacy (AOR 1.8, 95% CI 1.3-2.5), and comorbidities (AOR 1.4, 95% CI 1.1-1.7), and negatively associated with the number of statin types (AOR 0.6, 95% CI 0.5-0.9) and smoking status (AOR 0.7, 95% CI 0.6-0.9). The sensitivity analysis showed results similar to the regression model. Conclusions: Statin adherence is influenced by an aging, multimorbid population exposed to polypharmacy, multiple statin options, and socioeconomic diversity.
Key points
- Adherence in the first two years after the first date of statin prescription was measured as the proportion of days covered (PDC)
- The mean PDC was 91.6% (±22.2%)
- 72.0% of the patients were adherent to statins, with PDC ≥ 80%
- Statin adherence was positively associated with age, area-based social advantage and disadvantage index, polypharmacy, and comorbidities
- Statin adherence was negatively associated with the number of statin types prescribed to the patients and the smoking status of patients
Plain Language Summary: The objective of this study was to identify predictors of statin adherence among patients in the first two years after the date of first prescription, using real-world data. The dataset used was the Electronic Practice Based Research Network Linked Dataset. Statin adherence was calculated using the proportion of days covered (PDC); a PDC ≥ 80% during the two-year observation period was considered adherent. Overall, 3,432 patients were eligible for this study, and 72.0% of them were adherent to statins in the first two years after the date of first prescription. Statin adherence was positively associated with age, area-based social advantage and disadvantage index, the number of medicines taken by the patient, and the number of chronic conditions the patient suffered from. Statin adherence was negatively associated with the number of statin types prescribed and with patients' smoking status.
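The PDC measure described above can be sketched as: merge prescription fill intervals (so overlapping supply is not double-counted), then divide covered days by the observation window. A minimal illustration — the review used a modified PDC formula whose exact details are not given in the abstract, so this shows only the standard computation:

```python
def pdc(fills, obs_days):
    """fills: list of (start_day, days_supplied) within the observation
    window; overlapping supply is merged rather than double-counted."""
    intervals = sorted((s, min(s + d, obs_days)) for s, d in fills)
    covered, cur_start, cur_end = 0, None, None
    for s, e in intervals:
        if cur_end is None or s > cur_end:   # new non-overlapping interval
            if cur_end is not None:
                covered += cur_end - cur_start
            cur_start, cur_end = s, e
        else:                                # overlap: extend current interval
            cur_end = max(cur_end, e)
    if cur_end is not None:
        covered += cur_end - cur_start
    return covered / obs_days

# Two 30-day fills over a 60-day window, second fill picked up 5 days early
p = pdc([(0, 30), (25, 30)], obs_days=60)
print(round(p, 3), p >= 0.80)  # adherent under the PDC >= 80% threshold
```

Because the early refill overlaps the first supply, only 55 of 60 days are covered, not 60.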
Burdon, M. G.; Denson, S.; Tang, M.; Mellor, J.; Ward, T.
Background: Working while sick (presenteeism) with an infectious disease contributes to the spread of infections and is detrimental to productivity. Respiratory illnesses are a common cause of sickness in the working population, so understanding the prevalence of presenteeism linked to respiratory illness is important. Methods: Winter Covid Infection Study (WCIS) panel members in work, aged 18-64, were surveyed in February-March 2024 and asked about presenteeism in the previous 28 days. Multilevel regression and poststratification was used to estimate the prevalence and length of presenteeism and its effect on productivity in the English workforce, as approximated using the WCIS survey sample calibrated to census proportions. Differences by demographic group and work sector were also analysed. Results: Around one in six working adults in England worked while sick with a respiratory infection during the study period, and one in ten attended a non-home workplace. Overall, around one day per adult was spent working while sick with a respiratory infection, approximately half of which was non-home working. Respondents felt they were able to work at around three-quarters of their usual capacity while sick. Presenteeism was more common among respondents who were younger, White, worked in a hybrid pattern, lived in larger households, had Long COVID, or worked in teaching and education. Conclusion: Working while sick with a respiratory infection is relatively common, including among those who primarily work away from the home. Key messages: - Around one in six working-age adults in employment worked while sick with a respiratory infection during the study period (Feb-Mar 2024). - The likelihood of working while sick with a respiratory infection varied by demographic group and work sector. - On average, survey respondents said they could work at around three-quarters of their normal effectiveness while sick with a respiratory infection.
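Multilevel regression and poststratification, as used above, fits a model of the outcome across demographic cells and then reweights the cell estimates to known census proportions. The poststratification step alone can be sketched as follows; all subgroup estimates and census shares below are hypothetical, not figures from the study:

```python
def poststratify(cell_estimates, census_shares):
    """Population estimate = sum over cells of (cell estimate x census share).
    cell_estimates: modelled presenteeism prevalence per demographic cell.
    census_shares: each cell's share of the target population (sums to 1)."""
    assert abs(sum(census_shares.values()) - 1.0) < 1e-9
    return sum(cell_estimates[c] * census_shares[c] for c in census_shares)

# Hypothetical two-cell example: younger workers more likely to work while sick
est = poststratify({"18-39": 0.20, "40-64": 0.12},
                   {"18-39": 0.45, "40-64": 0.55})
print(round(est, 3))
```

Reweighting this way corrects for a survey panel whose demographic mix differs from the workforce, which is why the abstract describes the sample as "calibrated to census proportions".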
Lloyd, S. J.; Stockley, R. C.
Background: Despite recommendations in clinical guidelines, clinical experience indicates that engagement with splints and orthotics varies among people after stroke. Objectives: The aim of the study was to understand the factors that influence engagement with splints and orthotics in people after stroke. Methods: People after stroke who had been wearing a splint or orthotic (also known as devices) for at least 2 months under the care of one Community Neurosciences Team in the UK's National Health Service were included. Semi-structured interviews based on the constructs of Bandura's Social Cognitive Theory (SCT) were used to gather participants' views, and a framework analysis applying the constructs of SCT was completed using NVivo software. Results: Four key themes were identified: 1. Self-Regulation: difficulties applying the device and aesthetic acceptability. 2. Self-Efficacy: increased confidence when wearing the device and reduced motivation to wear the device. 3. Outcome Expectations: reduced falls risk, improved gait, improved balance, maintained range of movement, and negative effects such as discomfort, pain, and itching. 4. Social Support: support needed to apply the device and the burden on family members/carers of applying the device correctly. Conclusions: The findings of this study highlight key factors that influence engagement with orthotics and splints, including difficulty applying the device after stroke, device aesthetics, comfort, and the importance of continued support from carers. Manufacturers should consider how people after stroke can independently don and doff devices. Education of carers and family members also appears key to supporting their engagement.